1.
J Transl Med ; 22(1): 429, 2024 May 06.
Article En | MEDLINE | ID: mdl-38711123

BACKGROUND: Previous literature has explored the relationship between chronic atrophic gastritis (CAG) and isolated cancers within the upper gastrointestinal cancers; however, an integrative synthesis across the totality of upper gastrointestinal cancers was conspicuously absent. The research objective was to assess the relationship between CAG and the risk of incident upper gastrointestinal cancers, specifically including gastric cancer, oesophageal cancer, and oesophagogastric junction cancer. METHODS: Rigorous systematic searches were conducted across three major databases, namely PubMed, Embase and Web of Science, encompassing the timeline from database inception until August 10, 2023. We extracted the necessary odds ratios (OR) and their corresponding 95% confidence intervals (CI) for subsequent meta-analysis. Statistical analyses were conducted using Stata 17.0 software. RESULTS: This meta-analysis included a total of 23 articles encompassing 5858 patients diagnosed with upper gastrointestinal cancers. CAG was associated with a statistically significant 4.12-fold elevated risk of incident gastric cancer (OR = 4.12, 95% CI 3.20-5.30). Likewise, CAG was linked to a 2.08-fold increased risk of incident oesophageal cancer (OR = 2.08, 95% CI 1.60-2.72). Intriguingly, a specific correlation was found between CAG and the risk of incident oesophageal squamous cell carcinoma (OR = 2.29, 95% CI 1.77-2.95), while no significant association was detected for oesophageal adenocarcinoma (OR = 0.62, 95% CI 0.17-2.26). Moreover, CAG was correlated with a 2.77-fold heightened risk of oesophagogastric junction cancer (OR = 2.77, 95% CI 2.21-3.46). Notably, for the same type of upper gastrointestinal cancer, diagnosing CAG through histological methods was linked to a 33-77% higher risk of developing cancer compared with diagnosing CAG through serological methods.
CONCLUSION: This meta-analysis indicated a two- to fourfold increased risk of gastric cancer, oesophageal cancer, and oesophagogastric junction cancer in patients with CAG. Importantly, for the same upper gastrointestinal cancer, the risk of incident cancer was higher when CAG was diagnosed histologically compared to serological diagnosis. Further rigorous study designs are required to explore the impact of CAG diagnosed through both diagnostic methods on the risk of upper gastrointestinal cancers.


Gastritis, Atrophic , Gastrointestinal Neoplasms , Humans , Gastritis, Atrophic/complications , Gastritis, Atrophic/epidemiology , Risk Factors , Gastrointestinal Neoplasms/epidemiology , Gastrointestinal Neoplasms/pathology , Chronic Disease , Incidence , Esophageal Neoplasms/epidemiology , Esophageal Neoplasms/pathology , Stomach Neoplasms/epidemiology , Stomach Neoplasms/pathology , Male , Odds Ratio , Female , Publication Bias
4.
Stat Methods Med Res ; 33(3): 359-375, 2024 Mar.
Article En | MEDLINE | ID: mdl-38460950

Simulation studies are commonly used to evaluate the performance of newly developed meta-analysis methods. For methodology that is developed for an aggregated data meta-analysis, researchers often resort to simulating the aggregated data directly, instead of simulating the individual participant data from which the aggregated data would be calculated in reality. Clearly, distributional characteristics of the aggregated data statistics may be derived from distributional assumptions about the underlying individual data, but they are often not made explicit in publications. This article provides the distribution of the aggregated data statistics derived from a heteroscedastic mixed effects model for continuous individual data, and a procedure for directly simulating the aggregated data statistics. We also compare our simulation approach with other simulation approaches used in the literature. We describe their theoretical differences and conduct a simulation study for three meta-analysis methods: the DerSimonian and Laird method for pooling aggregated study effect sizes, and the Trim & Fill method and the precision-effect test and precision-effect estimate with standard errors (PET-PEESE) method for adjustment of publication bias. We demonstrate that the choice of simulation model for aggregated data may have an impact on (the conclusions of) the performance of the meta-analysis method. We recommend the use of multiple aggregated data simulation models to investigate the sensitivity of the performance of the meta-analysis method. Additionally, we recommend that researchers make the individual participant data model explicit and derive from it the distributional consequences for the aggregated statistics, to help select appropriate aggregated data simulation models.
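The simulate-aggregated-data-then-pool workflow described in this abstract can be sketched in a few lines. This is a minimal, stdlib-only Python illustration, assuming normally distributed study effects around a common mean with between-study heterogeneity; the constants and the rough standardized-mean-difference variance approximation are illustrative, not taken from the article.

```python
import math
import random

def dersimonian_laird(effects, variances):
    """Pool aggregated study effects with the DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # Cochran's Q and the moment estimator of between-study variance tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight by total (within + between) variance and pool
    w_star = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return mu, se, tau2

# Simulate the aggregated statistics directly (no individual participant data).
random.seed(1)
true_mu, tau = 0.3, 0.1                      # illustrative values, not from the article
effects, variances = [], []
for _ in range(20):
    n = random.randint(30, 200)              # per-arm sample size
    theta = random.gauss(true_mu, tau)       # study-specific true effect
    var = 2.0 / n                            # rough variance of a small SMD
    effects.append(random.gauss(theta, math.sqrt(var)))
    variances.append(var)

mu, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled effect {mu:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```

Simulating the aggregated statistics from an explicit individual-data model, as the article recommends, would replace the `var = 2.0 / n` shortcut with the variance implied by that model.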


Publication Bias , Humans , Computer Simulation , Bias
6.
J Sch Psychol ; 103: 101294, 2024 Apr.
Article En | MEDLINE | ID: mdl-38432736

Recent psychological research suggests that many published studies cannot be replicated (e.g., Open Science Collaboration, 2015). The inability to replicate results suggests that there are influences and biases in the publication process that encourage publication of unusual, rather than representative, results, and that also discourage independent replication of published studies. A brief discussion of the ways in which publication bias and professional incentives may distort the research literature in school psychology is contrasted against the importance of replications and preregistration of research (i.e., registered reports) as self-correcting mechanisms for research in school psychology. The limitations of current practices, coupled with the importance of registered reports and replications as self-correcting mechanisms, provide the context for this ongoing initiative in the Journal of School Psychology. Processes for manuscript submission, review, and publication are presented to encourage researchers to preregister studies and submit replications for publication.


Pre-Registration Publication , Schools , Humans , Publication Bias
7.
J Hazard Mater ; 471: 133690, 2024 Jun 05.
Article En | MEDLINE | ID: mdl-38336580

Some narratives present biodegradable plastic use for soil mulching practices in agriculture as "environmentally friendly" and "sustainable" alternatives to conventional plastics. To verify these narratives, environmental research recently started focusing on their potential impact on soil health, highlighting some concerns. The paper by Degli-Innocenti criticizes this unfolding knowledge, arguing that it is affected by communication hypes, alarmistic writing and a focus on exposure scenarios purposely crafted to yield negative effects. The quest of scientists for increased impact, the paper concludes, is the driver of such behavior. As scholars devoted to the safeguarding of scientific integrity, we set out to verify whether this serious claim is grounded in evidence. Through a bibliometric analysis (using number of paper reads, citations and mentions on social media to measure the impact of publications) we found that: i) the papers pointed out by Degli-Innocenti as examples of biased works do not score higher than the median of similar publications; ii) the methodology used to support the conclusion is non-scientific; and iii) the paper does not fulfil the requirements concerning disclosure of conflicts of interest. We conclude that this paper represents a non-scientific opinion, potentially biased by a conflict of interest. We ask that the paper be clearly tagged as such, after the necessary corrections to the ethics section have been made. That being said, the paper does offer some useful insights for the definition of exposure scenarios in risk assessment. We comment and elaborate on these proposed models, hoping that this can help to advance the field.


Publication Bias , Biodegradable Plastics/chemistry
8.
Res Synth Methods ; 15(3): 500-511, 2024 May.
Article En | MEDLINE | ID: mdl-38327122

Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386/597,699), environmental sciences (199/12,707), psychology (605/23,563), and economics (327/91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.


Economics , Meta-Analysis as Topic , Psychology , Publication Bias , Humans , Ecology , Research Design , Selection Bias , Probability , Medicine
9.
Curr Med Res Opin ; 40(3): 493-503, 2024 03.
Article En | MEDLINE | ID: mdl-38354123

Plain language resources (PLR) are lay summaries of clinical trial results or plain language summaries of publications, in digital/visual/language formats. They aim to provide accurate information in jargon-free, easy-to-understand language that can meet the health information needs of the general public, especially patients and caregivers. These are typically developed by the study sponsors or investigators, or by national public health bodies, research hospitals, patient organizations, and non-profit organizations. While the usefulness of PLR seems unequivocal, they have never been analyzed from the perspective of ethics. In this commentary, we do so and reflect on whether PLR are categorically advantageous or if they solve certain issues but raise new problems at the same time. Ethical concerns that PLR can potentially address include but are not limited to individual and community level health literacy, patient empowerment and autonomy. We also highlight the ethical issues that PLR may potentially exacerbate, such as fair and balanced presentation and interpretation of medical knowledge, positive publication bias, and equitable access to information. PLR are important resources for patients, with promising implications for individual as well as community health. However, they require appropriate oversight and standards to optimize their potential value. Hence, we also highlight recommendations and best practices from our reading of the literature that aim to minimize these biases.


Plain language resources (PLR) are a way to make medical research information easier for everyone to understand. They can be summaries of clinical trial results, articles, or presentations. PLR can also be made as videos, brochures, or infographics. They can help patients understand their health better and take care of themselves. However, there are some things to be careful about. PLR may only report the good results and not mention the negative ones, which could be biased. Also, some people with disabilities or who don't speak the language well might have a hard time understanding PLR. To make sure PLR are helpful and fair, there should be standard guidelines for how they are made and shared. This will make sure that PLR are useful and don't cause any problems.


Language , Publishing , Humans , Publication Bias , Clinical Trials as Topic
10.
Eur J Gastroenterol Hepatol ; 36(4): 351-358, 2024 Apr 01.
Article En | MEDLINE | ID: mdl-38407898

The systematic review aimed to assess the risks of metabolic dysfunction-associated steatotic liver disease (MASLD) on all-cause and cause-specific mortality in patients with type 2 diabetes (T2DM). EMBASE and MEDLINE were searched from inception to June 2022 for observational studies examining the relationship between MASLD and the risk of mortality among T2DM patients. Meta-analysis was conducted using random-effects models with hazard ratios (HRs) to quantify the risk of mortality. A total of 5877 articles were screened, and ultimately, 12 eligible studies encompassing 368,528 T2DM patients, with a median follow-up of 8.9 years (interquartile range, 4.7-14.5), were included. Our analysis revealed a significant association between MASLD and an increased risk of all-cause mortality in T2DM patients [HR 1.28; 95% confidence interval (CI), 1.05-1.58; I² = 90%]. Meta-regression analyses did not show significant effects of mean age, mean BMI, and percentage of smokers, hypertension, and hyperlipidemia on the association between MASLD and the risk of all-cause mortality. However, we found that MASLD was not significantly associated with mortality related to cardiovascular diseases (HR 1.05; 95% CI, 0.82-1.35; I² = 0%) or cancer (HR 1.21; 95% CI, 0.41-3.51; I² = 79%) among patients with T2DM. No publication bias was observed. This comprehensive meta-analysis provides substantial evidence supporting a significant association between MASLD and an increased risk of all-cause mortality among the T2DM population. These findings underscore the potential benefits of screening for MASLD in T2DM patients, aiding in the early identification of high-risk individuals and enabling risk modification strategies to improve survival.


Cardiovascular Diseases , Diabetes Mellitus, Type 2 , Fatty Liver , Hypertension , Humans , Diabetes Mellitus, Type 2/complications , Publication Bias
11.
PLoS One ; 19(2): e0297075, 2024.
Article En | MEDLINE | ID: mdl-38359021

Previously observed negative correlations between sample size and effect size (n-ES correlation) in psychological research have been interpreted as evidence for publication bias and related undesirable biases. Here, we present two studies aimed at better understanding to what extent negative n-ES correlations reflect such biases or might be explained by unproblematic adjustments of sample size to expected effect sizes. In Study 1, we analysed n-ES correlations in 150 meta-analyses from cognitive, organizational, and social psychology and in 57 multiple replications, which are free from relevant biases. In Study 2, we used a random sample of 160 psychology papers to compare the n-ES correlation for effects that are central to these papers and effects selected at random from these papers. n-ES correlations proved inconspicuous in meta-analyses. In line with previous research, they do not suggest that publication bias and related biases have a strong impact on meta-analyses in psychology. A much higher n-ES correlation emerged for publications' focal effects. To what extent this should be attributed to publication bias and related biases remains unclear.
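The n-ES correlation examined in this study is, at its simplest, a rank correlation between study sample sizes and (absolute) effect sizes. A minimal stdlib-only Python sketch, assuming no tied ranks for brevity (a real analysis would use a tie-aware Spearman or Kendall implementation, and hypothetical toy data are used here):

```python
import math

def _ranks(xs):
    # Rank values 1..n; assumes no ties for simplicity.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def n_es_correlation(sample_sizes, effect_sizes):
    """Spearman correlation between sample sizes and absolute effect sizes;
    a strongly negative value is one conventional small-study-effects flag."""
    return _pearson(_ranks(sample_sizes), _ranks([abs(e) for e in effect_sizes]))

# Hypothetical meta-analysis in which smaller studies report larger effects
ns = [20, 50, 100, 200, 400]
ds = [0.9, 0.6, 0.4, 0.25, 0.2]
r = n_es_correlation(ns, ds)
```

As the abstract stresses, a negative value here is ambiguous on its own: it may reflect bias, but also unproblematic power-based adjustment of sample size to expected effect sizes.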


Psychology, Social , Bias , Publication Bias , Sample Size , Meta-Analysis as Topic
12.
Neurology ; 102(6): e208032, 2024 Mar 26.
Article En | MEDLINE | ID: mdl-38408286

BACKGROUND AND OBJECTIVES: Outcome reporting bias occurs when publication of trial results is dependent on clinical significance, thereby threatening the validity of trial results. Research on immunomodulatory drugs in multiple sclerosis has thrived in recent years. We aim to comprehensively examine to what extent outcome reporting bias is present in these trials and the possible underlying factors. METHODS: We identified clinical trials evaluating the efficacy and safety of immunomodulatory drugs in patients with multiple sclerosis (MS) registered in ClinicalTrials.gov after September 2007 and completed before the end of 2018. Information about study design, type of funding, and primary and secondary outcome measures was extracted from the registry. Timing of registration in relation to study initiation and subsequent amendments to the planned outcomes were reviewed. Publications related to these trials were identified in several bibliographic databases using the trial registration number. Registered primary and secondary outcomes were recorded for each trial and compared with outcomes in the publication describing the main outcomes of the trial. RESULTS: A search of ClinicalTrials.gov identified 535 eligible registered clinical trials; of these, 101 had a matching publication. Discrepancies between registered and published primary and secondary outcomes were found in 95% of the trials, including discrepancies between the registered and published primary outcomes in 26 publications. Forty-four percent of the published secondary outcomes were not included in the registry. A similar proportion of registered and nonregistered reported primary efficacy outcomes were positive (favoring the intervention). Non-industry-funded and open-label trials in MS were more prone to selective primary outcome reporting, although these findings did not reach statistical significance. Only two-thirds of the trials were registered in ClinicalTrials.gov before the trial start date, and 62% of trials made amendments in registered outcomes during or after the trial period. DISCUSSION: Selective outcome reporting is prevalent in trials of disease-modifying drugs in people with MS. We propose methods to diminish the occurrence of this bias in future research.


Multiple Sclerosis , Humans , Publication Bias , Multiple Sclerosis/drug therapy , Research Design , Registries , Immunomodulating Agents
13.
Syst Rev ; 13(1): 52, 2024 02 03.
Article En | MEDLINE | ID: mdl-38310288

BACKGROUND: Several studies have explored the effects of ill health and health shocks on labour supply. However, there are very few systematic reviews and meta-analyses in this area. The current work aims to fill this gap by undertaking a systematic review and meta-analysis of the effects of ill health and health shocks on labour supply. METHODS: We searched the EconLit and MEDLINE databases along with grey literature to identify relevant papers for the analysis. Necessary information was extracted from the papers using an extraction tool. We calculated partial correlations to determine effect sizes and estimated the overall effect sizes by using the random effects model. Sub-group analyses were conducted based on geography, publication year and model type to assess the sources of heterogeneity. Model type entailed distinguishing articles that used the standard ordinary least squares (OLS) technique from those that used other estimation techniques such as quasi-experimental methods, including propensity score matching and difference-in-differences methodologies. Multivariate and univariate meta-regressions were employed to further examine the sources of heterogeneity. Moreover, we tested for publication bias by using a funnel plot, Begg's test and the trim and fill methodology. RESULTS: We found a negative and statistically significant pooled estimate of the effect of ill health and health shocks on labour supply (partial r = -0.05, p < .001). The studies exhibited substantial heterogeneity. Sample size, geography, model type and publication year were found to be significant sources of heterogeneity. The funnel plot and the trim and fill methodology with imputation on the left showed some level of publication bias, but this was contradicted by Begg's test and by the trim and fill methodology with imputation on the right. CONCLUSION: The study examined the effects of ill health and health shocks on labour supply. We found negative, statistically significant pooled estimates pertaining to the overall effect of ill health and health shocks on labour supply, including in sub-groups. Empirical studies on the effects of ill health and health shocks on labour supply have oftentimes found a negative relationship. Our meta-analysis results, which used a large, combined sample size, seem to reliably confirm this finding.
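Begg's test, one of the publication-bias checks used in this review, is a rank correlation between standardized effect-size deviates and their sampling variances. A stdlib-only Python sketch with hypothetical toy data (a production analysis would use a tie-corrected Kendall statistic):

```python
import math
from itertools import combinations

def beggs_test(effects, variances):
    """Begg's rank correlation test for funnel-plot asymmetry: Kendall's tau
    between variance-stabilized effect deviates and sampling variances."""
    w = [1.0 / v for v in variances]
    mu = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # Standardize each deviation by the variance of (y_i - mu_hat)
    t = [(y - mu) / math.sqrt(v - 1.0 / sum(w)) for y, v in zip(effects, variances)]
    concordant = discordant = 0
    for (t1, v1), (t2, v2) in combinations(zip(t, variances), 2):
        s = (t1 - t2) * (v1 - v2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n = len(effects)
    tau = (concordant - discordant) / (n * (n - 1) / 2)
    # Normal approximation for the z statistic (no tie correction)
    z = 3.0 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2.0 * (2 * n + 5))
    return tau, z

# Hypothetical asymmetric meta-analysis: noisier studies report larger effects.
tau, z = beggs_test(effects=[0.1, 0.2, 0.3, 0.5, 0.8],
                    variances=[0.01, 0.02, 0.05, 0.1, 0.2])
```

With this deliberately asymmetric toy data, the effects rise monotonically with variance, so tau is strongly positive, which is the funnel-plot-asymmetry signal Begg's test looks for.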


Research Design , Humans , Sample Size , Publication Bias , Workforce , Databases, Factual
14.
BMJ Ment Health ; 27(1)2024 Feb 12.
Article En | MEDLINE | ID: mdl-38350669

QUESTION: We examined the effect of study characteristics, risk of bias and publication bias on the efficacy of pharmacotherapy in randomised controlled trials (RCTs) for obsessive-compulsive disorder (OCD). STUDY SELECTION AND ANALYSIS: We conducted a systematic search of double-blinded, placebo-controlled, short-term RCTs with selective serotonergic reuptake inhibitors (SSRIs) or clomipramine. We performed a random-effect meta-analysis using change in the Yale-Brown Obsessive-Compulsive Scale (YBOCS) as the primary outcome. We performed meta-regression for risk of bias, intervention, sponsor status, number of trial arms, use of placebo run-in, dosing, publication year, age, severity, illness duration and gender distribution. Furthermore, we analysed publication bias using a Bayesian selection model. FINDINGS: We screened 3729 articles and included 21 studies, with 4102 participants. Meta-analysis showed an effect size of -0.59 (Hedges' G, 95% CI -0.73 to -0.46), equalling a 4.2-point reduction in the YBOCS compared with placebo. The most recent trial was performed in 2007 and most trials were at risk of bias. We found an indication for publication bias, and subsequent correction for this bias resulted in a depleted effect size. In our meta-regression, we found that high risk of bias was associated with a larger effect size. Clomipramine was more effective than SSRIs, even after correcting for risk of bias. After correction for multiple testing, other selected predictors were non-significant. CONCLUSIONS: Our findings reveal superiority of clomipramine over SSRIs, even after adjusting for risk of bias. Effect sizes may be attenuated when considering publication bias and methodological rigour, emphasising the importance of robust studies to guide clinical utility of OCD pharmacotherapy. PROSPERO REGISTRATION NUMBER: CRD42023394924.


Clomipramine , Obsessive-Compulsive Disorder , Humans , Clomipramine/therapeutic use , Selective Serotonin Reuptake Inhibitors/therapeutic use , Publication Bias , Obsessive-Compulsive Disorder/drug therapy , Randomized Controlled Trials as Topic
15.
Res Synth Methods ; 15(3): 483-499, 2024 May.
Article En | MEDLINE | ID: mdl-38273211

As traditionally conceived, publication bias arises from selection operating on a collection of individually unbiased estimates. A canonical form of such selection across studies (SAS) is the preferential publication of affirmative studies (i.e., those with significant, positive estimates) versus nonaffirmative studies (i.e., those with nonsignificant or negative estimates). However, meta-analyses can also be compromised by selection within studies (SWS), in which investigators "p-hack" results within their study to obtain an affirmative estimate. Published estimates can then be biased even conditional on affirmative status, which compromises the performance of existing methods that only consider SAS. We propose two new analysis methods that accommodate joint SAS and SWS; both analyze only the published nonaffirmative estimates. First, we propose estimating the underlying meta-analytic mean by fitting "right-truncated meta-analysis" (RTMA) to the published nonaffirmative estimates. This method essentially imputes the entire underlying distribution of population effects. Second, we propose conducting a standard meta-analysis of only the nonaffirmative studies (MAN); this estimate is conservative (negatively biased) under weakened assumptions. We provide an R package (phacking) and website (metabias.io). Our proposed methods supplement existing methods by assessing the robustness of meta-analyses to joint SAS and SWS.
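The MAN idea, pooling only the published nonaffirmative estimates, can be illustrated with a deliberately simplified fixed-effect version in Python; the authors' phacking R package implements the full method, and the toy data below are hypothetical:

```python
def man_estimate(effects, ses, z_crit=1.96):
    """Pool only nonaffirmative studies, i.e. those that are NOT both positive
    and statistically significant. A simplified fixed-effect stand-in for the
    MAN idea, not the phacking package's implementation."""
    kept = [(y, s) for y, s in zip(effects, ses)
            if not (y > 0 and y / s > z_crit)]       # drop affirmative studies
    w = [1.0 / s ** 2 for _, s in kept]              # inverse-variance weights
    mu = sum(wi * y for wi, (y, _) in zip(w, kept)) / sum(w)
    return mu, len(kept)

# Hypothetical estimates: the two affirmative studies are excluded from the pool.
mu, k = man_estimate(effects=[0.5, 0.6, 0.1, -0.2, 0.05],
                     ses=[0.1, 0.2, 0.15, 0.2, 0.1])
```

Because affirmative results are discarded regardless of how they were produced, the pooled value is negatively biased, which is exactly the conservative behavior the abstract claims for MAN.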


Algorithms , Meta-Analysis as Topic , Models, Statistical , Publication Bias , Humans , Research Design , Data Interpretation, Statistical , Software , Reproducibility of Results , Computer Simulation
16.
PLoS Biol ; 22(1): e3002423, 2024 Jan.
Article En | MEDLINE | ID: mdl-38190355

Power analysis currently dominates sample size determination for experiments, particularly in grant and ethics applications. Yet, this focus could paradoxically result in suboptimal study design because publication bias towards studies with the largest effects can lead to the overestimation of effect sizes. In this Essay, we propose a paradigm shift towards better study designs that focus less on statistical power. We also advocate for (pre)registration and obligatory reporting of all results (regardless of statistical significance), better facilitation of team science and multi-institutional collaboration that incorporates heterogenization, and the use of prospective and living meta-analyses to generate generalizable results. Such changes could make science more effective and, potentially, more equitable, helping to cultivate better collaborations.


Research Design , Prospective Studies , Sample Size , Publication Bias
17.
Syst Rev ; 13(1): 11, 2024 01 02.
Article En | MEDLINE | ID: mdl-38169404

INTRODUCTION: One concern in meta-analyses is the presence of publication bias (PB), which leads to the dissemination of inflated results. In this study, we assessed the extent to which meta-analyses published in the field of otorhinolaryngology in 2021 evaluated the presence of PB. METHODS: Six of the most influential journals in the field were selected. A search was conducted, and data were extracted from the included studies. In cases where PB was not assessed by the authors, we evaluated the risk of its presence by designing funnel plots and performing statistical tests. RESULTS: Seventy-five systematic reviews were included. Fifty-one percent of them used at least one method for assessing the risk of PB, with visual inspection of a funnel plot being the most frequent method used. Twenty-nine percent of the studies reported a high risk of PB presence. We replicated the results of 11 meta-analyses that did not assess the risk of PB and found that 63.6% were at high risk. We also found that a considerable proportion of the systematic reviews that found a high risk of PB did not take it into consideration when making conclusions and discussing their results. DISCUSSION: Our results indicate that systematic reviews published in some of the most influential journals in the field do not implement enough measures in their search strategies to reduce the risk of PB, nor do they assess the risk of its presence or take it into consideration when interpreting their results.


Publication Bias , Humans , Bias
18.
BMC Med Res Methodol ; 24(1): 9, 2024 Jan 11.
Article En | MEDLINE | ID: mdl-38212714

BACKGROUND: Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between preprint and subsequent journal article of COVID-19 randomized controlled trials. METHODS: The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles. Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article. RESULTS: The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 RCTs reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint-article RCTs were included in the analysis. The median (interquartile range) delay between preprint and journal article was 121 (73-187) days, the median sample size was 150 (71-464) participants, 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as 'some concern' for 80% of RCTs. We found that 81 preprint-article pairs of RCTs were consistent for all outcomes reported. There were nine RCTs with at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the estimate of the effect. Furthermore, six RCTs had at least one outcome missing in the journal article and 14 RCTs had at least one outcome added in the journal article compared to the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found. CONCLUSIONS: Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added and deleted in some journal articles.


COVID-19 , Peer Review, Research , Preprints as Topic , Publication Bias , Humans , Randomized Controlled Trials as Topic , Systematic Reviews as Topic
19.
Biom J ; 66(1): e2200102, 2024 Jan.
Article En | MEDLINE | ID: mdl-36642800

When comparing the performance of two or more competing tests, simulation studies commonly focus on statistical power. However, if the sizes of the tests being compared differ either from one another or from the nominal size, comparing tests based on power alone may be misleading. By analogy with diagnostic accuracy studies, we introduce relative positive and negative likelihood ratios to factor in both power and size in the comparison of multiple tests. We derive sample size formulas for a comparative simulation study. As an example, we compared the performance of six statistical tests for small-study effects in meta-analyses of randomized controlled trials: Begg's rank correlation, Egger's regression, Schwarzer's method for sparse data, the trim-and-fill method, the arcsine-Thompson test, and Lin and Chu's combined test. We illustrate that comparing power alone, or power adjusted or penalized for size, can be misleading, and how the proposed likelihood ratio approach enables accurate comparison of the trade-off between power and size across competing tests.
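The diagnostic-test analogy can be made concrete: treating rejection under the alternative as a true positive and rejection under the null as a false positive gives a positive likelihood ratio of power/size and a negative likelihood ratio of (1-power)/(1-size). A small Python sketch with hypothetical operating characteristics; this is the basic analogy only, not the paper's exact definitions of its relative likelihood ratios:

```python
def likelihood_ratios(power, size):
    """Diagnostic-test analogy for a hypothesis test:
    positive LR = P(reject | H1) / P(reject | H0),
    negative LR = P(not reject | H1) / P(not reject | H0)."""
    return power / size, (1.0 - power) / (1.0 - size)

# Hypothetical tests: B has higher power but an inflated type I error rate.
plr_a, nlr_a = likelihood_ratios(power=0.70, size=0.05)
plr_b, nlr_b = likelihood_ratios(power=0.80, size=0.15)
```

On the positive likelihood ratio, the nominally less powerful test A wins (about 14 vs. about 5.3), illustrating why ranking tests by power alone can mislead when their sizes differ.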


Publication Bias , Computer Simulation , Sample Size
20.
Psychol Med ; 54(3): 437-446, 2024 Feb.
Article En | MEDLINE | ID: mdl-37947238

Delay discounting, the extent to which individuals show a preference for smaller immediate rewards over larger delayed rewards, has been proposed as a transdiagnostic neurocognitive process across mental health conditions, but its examination in relation to posttraumatic stress disorder (PTSD) is comparatively recent. To assess the aggregated evidence for elevated delay discounting in relation to posttraumatic stress, we conducted a meta-analysis on existing empirical literature. Bibliographic searches identified 209 candidate articles, of which 13 articles with 14 independent effect sizes were eligible for meta-analysis, reflecting a combined sample size of N = 6897. Individual study designs included case-control (e.g. examination of differences in delay discounting between individuals with and without PTSD) and continuous association studies (e.g. relationship between posttraumatic stress symptom severity and delay discounting). In a combined analysis of all studies, the overall relationship was a small but statistically significant positive association between posttraumatic stress and delay discounting (r = .135, p < .0001). The same relationship was statistically significant for continuous association studies (r = .092, p = .027) and case-control designs (r = .179, p < .001). Evidence of publication bias was minimal. The included studies were limited in that many did not concurrently incorporate other psychiatric conditions in the analyses, leaving the specificity of the relationship to posttraumatic stress less clear. Nonetheless, these findings are broadly consistent with previous meta-analyses of delayed reward discounting in relation to other mental health conditions and provide further evidence for the transdiagnostic utility of this construct.


Delay Discounting , Problem Behavior , Stress Disorders, Post-Traumatic , Humans , Reward , Publication Bias
...